In statistics, a categorical variable (also called a qualitative variable) is a variable that can take on one of a limited, and usually fixed, number of possible values, assigning each individual or other unit of observation to a particular group or nominal category on the basis of some qualitative property. In computer science and some branches of mathematics, categorical variables are referred to as enumerations or enumerated types. Commonly (though not in this article), each of the possible values of a categorical variable is referred to as a level. The probability distribution associated with a categorical random variable is called a categorical distribution.
Categorical data is the statistical data type consisting of categorical variables or of data that has been converted into that form, for example as grouped data. More specifically, categorical data may derive from observations made of qualitative data that are summarised as counts or cross tabulations, or from observations of quantitative data grouped within given intervals. Often, purely categorical data are summarised in the form of a contingency table. However, particularly when considering data analysis, it is common to use the term "categorical data" for data sets that, while containing some categorical variables, may also contain non-categorical variables. Ordinal variables have a meaningful ordering, while nominal variables have no meaningful ordering.
A categorical variable that can take on exactly two values is termed a binary variable or a dichotomous variable; an important special case is the Bernoulli variable. Categorical variables with more than two possible values are called polytomous variables; categorical variables are often assumed to be polytomous unless otherwise specified. Discretization is treating continuous data as if it were categorical. Dichotomization is treating continuous data or polytomous variables as if they were binary variables. Regression analysis often treats category membership with one or more quantitative dummy variables.
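As a rough illustration of these transformations, the sketch below uses pandas with invented data and column names (nothing here is prescribed by the article) to show dummy coding of a categorical variable, discretization of a continuous score, and dichotomization of the same score.

```python
import pandas as pd

# Invented example data: a nominal variable and a continuous score.
df = pd.DataFrame({
    "nationality": ["French", "Italian", "German", "French"],
    "optimism":    [4.2, 3.1, 5.0, 4.8],
})

# Dummy (one-hot) coding: one indicator column per category.
dummies = pd.get_dummies(df["nationality"], prefix="nat")

# Discretization: treat the continuous score as categorical bands.
# The cut-points are arbitrary choices for illustration.
df["optimism_band"] = pd.cut(df["optimism"], bins=[0, 3.5, 4.5, 10],
                             labels=["low", "medium", "high"])

# Dichotomization: reduce the continuous score to a binary variable.
df["optimist"] = (df["optimism"] > 4.0).astype(int)

print(pd.concat([df, dummies], axis=1))
```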
Because the values of a categorical variable carry no inherent numeric meaning, the central tendency of a set of categorical variables is given by its mode; neither the mean nor the median can be defined. As an example, given a set of people, we can consider the set of categorical variables corresponding to their last names. We can consider operations such as equivalence (whether two people have the same last name), set membership (whether a person has a name in a given list), counting (how many people have a given last name), or finding the mode (which name occurs most often). However, we cannot meaningfully compute the "sum" of Smith + Johnson, or ask whether Smith is "less than" or "greater than" Johnson. As a result, we cannot meaningfully ask what the "average name" (the mean) or the "middle-most name" (the median) is in a set of names.
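A minimal sketch, using hypothetical last names, of the operations that remain meaningful for nominal data:

```python
from collections import Counter

names = ["Smith", "Johnson", "Smith", "Lee", "Johnson", "Smith"]
counts = Counter(names)

print(names[0] == names[2])    # equivalence: do two people share a name?
print("Lee" in counts)         # set membership: is the name on a given list?
print(counts["Smith"])         # counting: how many people have this name?
print(counts.most_common(1))   # mode: [('Smith', 3)] is the most common name

# By contrast, sum(names) or a numeric average of names is undefined:
# there is no meaningful arithmetic or ordering on nominal categories.
```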
This ignores the concept of alphabetical order, which is a property that is not inherent in the names themselves but in the way we construct the labels. For example, if we write the names in Cyrillic and consider the Cyrillic ordering of letters, we might get a different result when evaluating "Smith < Johnson" than if we write the names in the standard Latin alphabet; and if we write the names in Chinese characters, we cannot meaningfully evaluate "Smith < Johnson" at all, because no consistent ordering is defined for such characters. However, if we do consider the names as written, e.g., in the Latin alphabet, and define an ordering corresponding to standard alphabetical order, then we have effectively converted them into ordinal variables defined on an ordinal scale.
Categorical variables that have only two possible outcomes (e.g., "yes" vs. "no" or "success" vs. "failure") are known as binary variables (or Bernoulli variables). Because of their importance, these variables are often considered a separate category, with a separate distribution (the Bernoulli distribution) and separate regression models (logistic regression, probit regression, etc.). As a result, the term "categorical variable" is often reserved for cases with 3 or more outcomes, sometimes termed a multi-way variable in opposition to a binary variable.
It is also possible to consider categorical variables where the number of categories is not fixed in advance. As an example, for a categorical variable describing a particular word, we might not know in advance the size of the vocabulary, and we would like to allow for the possibility of encountering words that we have not already seen. Standard statistical models, such as those involving the categorical distribution and multinomial logistic regression, assume that the number of categories is known in advance, and changing the number of categories on the fly is tricky. In such cases, more advanced techniques must be used. An example is the Dirichlet process, which falls in the realm of nonparametric statistics. In such a case, it is logically assumed that an infinite number of categories exist, but at any one time most of them (in fact, all but a finite number) have never been seen. All formulas are phrased in terms of the number of categories actually seen so far rather than the (infinite) total number of potential categories in existence, and methods are created for incremental updating of statistical distributions, including adding "new" categories.
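One concrete illustration (a sketch, not the article's prescribed method) is the Chinese restaurant process, the sequential sampling scheme underlying the Dirichlet process: each new observation either joins an existing category or, with probability proportional to an assumed concentration parameter alpha, opens a previously unseen one.

```python
import random

def crp_assignments(n_items, alpha=1.0, seed=0):
    """Sample category assignments from a Chinese restaurant process."""
    rng = random.Random(seed)
    counts = []        # counts[k] = number of items seen in category k so far
    assignments = []
    for i in range(n_items):
        total = i + alpha
        # Join existing category k with probability counts[k] / total;
        # open a brand-new category with probability alpha / total.
        r = rng.uniform(0, total)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[k] += 1
                assignments.append(k)
                break
        else:
            counts.append(1)   # a previously unseen category appears
            assignments.append(len(counts) - 1)
    return assignments

print(crp_assignments(10))     # e.g. [0, 0, 0, 1, 0, 2, ...]
```

Note how all bookkeeping involves only the categories seen so far, never the (conceptually infinite) total number of potential categories.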
There are three main coding systems typically used in the analysis of categorical variables in regression: dummy coding, effects coding, and contrast coding. The regression equation takes the form Y = bX + a, where b is the slope and gives the weight empirically assigned to an explanator, X is the explanatory variable, and a is the Y-intercept; these values take on different meanings based on the coding system used. The choice of coding system does not affect the F or R² statistics. However, one chooses a coding system based on the comparison of interest, since the interpretation of the b values will vary.
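The following sketch checks that claim numerically on simulated data (the group means and sample sizes are invented): dummy and effects codings span the same column space, so ordinary least squares returns the same fitted values and hence the same R².

```python
import numpy as np

rng = np.random.default_rng(0)
groups = np.repeat([0, 1, 2, 3], 25)                   # four groups, n = 100
y = np.array([4.0, 3.5, 3.0, 3.8])[groups] + rng.normal(0, 1, size=100)

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

ones = np.ones_like(y)
# Dummy coding: group 0 is the reference; one indicator per remaining group.
X_dummy = np.column_stack([ones] + [(groups == g).astype(float) for g in (1, 2, 3)])
# Effects coding: group 3 is coded -1 on every code variable.
codes = [(groups == g).astype(float) - (groups == 3).astype(float) for g in (0, 1, 2)]
X_effects = np.column_stack([ones] + codes)

print(r_squared(X_dummy, y), r_squared(X_effects, y))  # identical values
```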
In dummy coding, the reference group is assigned a value of 0 for each code variable, the group of interest for comparison to the reference group is assigned a value of 1 for its specified code variable, while all other groups are assigned 0 for that particular code variable.
The b values should be interpreted such that the experimental group is being compared against the control group. A negative b value therefore indicates that the experimental group scored lower than the control group on the dependent variable. To illustrate this, suppose that we are measuring optimism among several nationalities and we have decided that French people would serve as a useful control. If we are comparing them against Italians and we observe a negative b value, this would suggest that Italians obtain lower optimism scores on average.
The following table is an example of dummy coding, with French as the control group and C1, C2, and C3 respectively being the codes for Italian, German, and Other (neither French nor Italian nor German):

        French   Italian   German   Other
C1         0        1         0       0
C2         0        0         1       0
C3         0        0         0       1
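A numerical sketch of this interpretation, using invented, noise-free group means so the estimates come out exactly: the intercept recovers the French (control) mean and each b is a group mean minus the control mean.

```python
import numpy as np

means = {"French": 4.0, "Italian": 3.4, "German": 3.1, "Other": 3.7}
labels = [g for g in means for _ in range(10)]   # ten observations per group
y = np.array([means[g] for g in labels])         # noise-free for clarity

def dummy_code(target):
    return np.array([1.0 if g == target else 0.0 for g in labels])

X = np.column_stack([np.ones(len(y)),
                     dummy_code("Italian"),      # C1
                     dummy_code("German"),       # C2
                     dummy_code("Other")])       # C3
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b)   # ~ [4.0, -0.6, -0.9, -0.3]: intercept = French mean,
           # each slope = that group's mean minus the French mean
```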
Effects coding can either be weighted or unweighted. Weighted effects coding is simply calculating a weighted grand mean, thus taking into account the sample size in each variable. This is most appropriate in situations where the sample is representative of the population in question. Unweighted effects coding is most appropriate in situations where differences in sample size are the result of incidental factors. The interpretation of b is different for each: in unweighted effects coding b is the difference between the mean of the experimental group and the grand mean, whereas in the weighted situation it is the mean of the experimental group minus the weighted grand mean.
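A small worked example with hypothetical group means and sizes makes the two grand means concrete:

```python
group_means = {"French": 4.0, "Italian": 3.4, "German": 3.1}
group_sizes = {"French": 50, "Italian": 30, "German": 20}

# Unweighted grand mean: the simple average of the group means.
unweighted = sum(group_means.values()) / len(group_means)

# Weighted grand mean: each group mean weighted by its sample size.
weighted = (sum(group_means[g] * group_sizes[g] for g in group_means)
            / sum(group_sizes.values()))

print(unweighted)   # 3.5
print(weighted)     # (200 + 102 + 62) / 100 = 3.64
```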
In effects coding, we code the group of interest with a 1, just as we would for dummy coding. The principal difference is that we code −1 for the group we are least interested in. Since we continue to use a g − 1 coding scheme, it is the group coded −1 that does not receive its own code variable, which is why we assign that code to the group of least interest. A code of 0 is assigned to all other groups.
The b values should be interpreted such that the experimental group is being compared against the mean of all groups combined (or against the weighted grand mean in the case of weighted effects coding). A negative b value therefore indicates that the coded group scored lower than the mean of all groups on the dependent variable. Using our previous example of optimism scores among nationalities, if the group of interest is Italians, observing a negative b value suggests that they obtain a lower optimism score than the mean of all groups.
The following table is an example of effects coding, with Other as the group of least interest:

        French   Italian   German   Other
C1         0        1         0      −1
C2         0        0         1      −1
C3         1        0         0      −1
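The sketch below, again with invented noise-free data and equal group sizes, confirms the unweighted interpretation: the intercept recovers the grand mean and each b is that group's mean minus the grand mean.

```python
import numpy as np

means = {"French": 4.0, "Italian": 3.4, "German": 3.1, "Other": 3.7}
labels = [g for g in means for _ in range(10)]   # equal group sizes
y = np.array([means[g] for g in labels])         # noise-free for clarity

def effect_code(target):
    # +1 for the target group, -1 for Other (least interest), 0 elsewhere.
    return np.array([1.0 if g == target else (-1.0 if g == "Other" else 0.0)
                     for g in labels])

X = np.column_stack([np.ones(len(y)),
                     effect_code("Italian"),     # C1
                     effect_code("German"),      # C2
                     effect_code("French")])     # C3
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b)   # ~ [3.55, -0.15, -0.45, 0.45]: intercept = grand mean (3.55),
           # each slope = that group's mean minus the grand mean
```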
Certain differences emerge when we compare our a priori coefficients between ANOVA and regression. Unlike in ANOVA, where it is at the researcher's discretion whether to choose coefficient values that are orthogonal or non-orthogonal, in regression it is essential that the coefficient values assigned in contrast coding be orthogonal. Furthermore, in regression, coefficient values must be in either fractional or decimal form; they cannot take on interval values.
The construction of contrast codes is restricted by three rules:
1. The contrast coefficients for each code variable must sum to zero.
2. The difference between the sum of the positive coefficients and the sum of the negative coefficients should equal 1.
3. Coded variables should be orthogonal.
Violating rule 2 produces accurate R² and F values, indicating that we would reach the same conclusions about whether or not there is a significant difference; however, we can no longer interpret the b values as a mean difference.
To illustrate the construction of contrast codes, consider the table below. The coefficients were chosen to reflect our a priori hypotheses.

Hypothesis 1: French and Italian persons will score higher on optimism than Germans (French = +0.33, Italian = +0.33, German = −0.66). This is illustrated by assigning the same coefficient to the French and Italian categories and a different one to the Germans. The signs indicate the direction of the relationship (hence giving Germans a negative sign reflects their lower hypothesized optimism scores).

Hypothesis 2: French and Italians are expected to differ in their optimism scores (French = +0.50, Italian = −0.50, German = 0). Here, assigning a zero value to Germans indicates their non-inclusion in the analysis of this hypothesis. Again, the signs assigned are indicative of the proposed relationship.

        French   Italian   German
C1      +0.33    +0.33     −0.66
C2      +0.50    −0.50        0
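A quick check of the two contrast codes from the table against the construction rules; the coefficients are the rounded thirds used above, so the sums are only approximately zero:

```python
import numpy as np

C1 = np.array([+0.33, +0.33, -0.66])   # French, Italian, German
C2 = np.array([+0.50, -0.50,  0.00])

print(C1.sum(), C2.sum())   # ~ 0 each (rule 1; 0.33 is a rounded 1/3)
print(C1 @ C2)              # 0.33*0.5 - 0.33*0.5 + 0 = 0 (rule 3: orthogonality)
```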